# English text generation

## BrtGPT-124M-Base
Bertug1911 · 2,128 downloads · 1 like

BrtGPT-124M-Base is a base model pre-trained on a large English corpus and free to use. It addresses the cumbersome setup and high compute requirements common to many open-source models.

Tags: Large Language Model, Transformers
## Uzmi Gpt
Apache-2.0 · rajan3208 · 30 downloads · 2 likes

GPT-2 is an open-source language model developed by OpenAI, based on the Transformer architecture and capable of generating coherent text.

Tags: Large Language Model, English
## Ice0.101 20.03 RP GRPO 1
Apache-2.0 · icefog72 · 55 downloads · 2 likes

A Mistral model optimized with the Unsloth framework and the Hugging Face TRL training library, achieving 2x training efficiency.

Tags: Large Language Model, Transformers, English
## Qwq 32B Bnb 4bit
Apache-2.0 · fantos · 115 downloads · 4 likes

The 4-bit quantized version of Qwen/QwQ-32B, produced with the BitsAndBytes library and suitable for text generation in resource-constrained environments.

Tags: Large Language Model, Transformers, English
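
As a rough illustration of why a 4-bit quantized 32B model fits where an fp16 one does not, the following sketch estimates weight footprints at different precisions. It is illustrative only: it counts pure weight storage and ignores quantization overhead and activation memory, and `weight_footprint_gb` is a hypothetical helper, not part of any library:

```python
def weight_footprint_gb(num_params: float, bits_per_weight: float) -> float:
    """Approximate weight-storage size in GiB: params * bits / 8 bytes."""
    return num_params * bits_per_weight / 8 / 1024**3

# QwQ-32B has roughly 32 billion parameters.
params = 32e9
fp16 = weight_footprint_gb(params, 16)  # ~59.6 GiB
int4 = weight_footprint_gb(params, 4)   # ~14.9 GiB
print(f"fp16: {fp16:.1f} GiB, 4-bit: {int4:.1f} GiB")
```

The 4x reduction is what moves a 32B model from multi-GPU territory into the range of a single high-memory consumer card.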
## Nano R1 Model
Apache-2.0 · Mansi-30 · 25 downloads · 2 likes

A Qwen2 model optimized with Unsloth and the Hugging Face TRL library, achieving a 2x inference speed improvement.

Tags: Large Language Model, Transformers, English
## Mistral Small 24B Instruct 2501 GPTQ G128 W4A16 MSE
Apache-2.0 · ConfidentialMind · 93 downloads · 1 like

A 4-bit quantized version of mistralai/Mistral-Small-24B-Instruct-2501, quantized by ConfidentialMind.com, yielding a smaller, faster model with minimal performance loss.

Tags: Large Language Model, English
## Model
Apache-2.0 · namrateshInfra · 101 downloads · 1 like

A fine-tuned Phi-4 model that achieves 2x training acceleration through the Unsloth and TRL libraries, focused on text generation tasks.

Tags: Large Language Model, Transformers, English
## Gguf Q5 K M NanoLM 1B Instruct V2
GPL-3.0 · Felladrin · 49 downloads · 1 like

A GGUF-format model converted from NanoLM-1B-Instruct-v2, suitable for text generation tasks.

Tags: Large Language Model, English
## Mini Magnum 12b V1.1 GGUF
Other · Reiterate3680 · 252 downloads · 2 likes

Mini-Magnum-12B-V1.1 is a text generation model built on intervitens/mini-magnum-12b-v1.1, supporting English and provided in quantized form.

Tags: Large Language Model, English
## Smollm 135M 4bit
Apache-2.0 · mlx-community · 312 downloads · 1 like

A 4-bit quantized 135M-parameter small language model, suitable for text generation in resource-constrained environments.

Tags: Large Language Model, Transformers, English
## Llmc Gpt2 774M 150B
MIT · mdouglas · 18 downloads · 1 like

A 774M-parameter language model based on the GPT-2 architecture, trained on 150 billion tokens from the FineWeb dataset.

Tags: Large Language Model, Transformers, English
## Llama 3 NeuralPaca 8b
NeuralNovel · 21 downloads · 7 likes

A model optimized from Meta Llama-3-8B, trained with the Unsloth framework and the Hugging Face TRL library for a 2x speed improvement.

Tags: Large Language Model, Transformers, English
## K2
Apache-2.0 · LLM360 · 109 downloads · 89 likes

K2 is a 65-billion-parameter large language model that outperforms Llama 2 70B while using 35% less compute, developed through a fully transparent training process.

Tags: Large Language Model, Transformers, English
## Retnet 1.3B 100B
MIT · fla-hub · 57 downloads · 1 like

A text generation model based on the RetNet (retentive network) architecture, trained on the SlimPajama-627B dataset.

Tags: Large Language Model, Safetensors, Supports Multiple Languages
## Microllama
Apache-2.0 · keeeeenw · 2,955 downloads · 46 likes

MicroLlama is a 300-million-parameter Llama model pretrained by individual developer keeeeenw on a $500 budget, focused on English text generation tasks.

Tags: Large Language Model, Transformers, English
## Gemma 1.1 7b It
google · 17.43k downloads · 271 likes

Gemma is a series of lightweight open models from Google, built with the same technology as Gemini and suitable for text generation tasks.

Tags: Large Language Model, Transformers
## Matter 0.1 7B GGUF
Apache-2.0 · munish0838 · 127 downloads · 1 like

Matter 7B is a fine-tuned model based on Mistral 7B, designed for text generation and supporting conversational interaction and function calling.

Tags: Large Language Model, English
## Mixtral Chat 7b
MIT · LeroyDyer · 76 downloads · 2 likes

A merged model created by combining several Mistral-7B variants with the mergekit tool, focused on text generation tasks.

Tags: Large Language Model, English
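
mergekit implements several merge algorithms; the simplest, a linear weighted average of corresponding parameters, can be sketched as below. Plain Python floats stand in for real weight tensors, and `linear_merge` is an illustrative name, not mergekit's API:

```python
def linear_merge(state_dicts, weights):
    """Weighted average of per-parameter values across models.

    state_dicts: list of {param_name: value} mappings sharing the same keys.
    weights: one mixing coefficient per model, normalized to sum to 1.
    """
    total = sum(weights)
    norm = [w / total for w in weights]
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name] for w, sd in zip(norm, state_dicts))
    return merged

# Two toy "models" with a single scalar parameter each:
a = {"layer.weight": 1.0}
b = {"layer.weight": 3.0}
print(linear_merge([a, b], [0.5, 0.5]))  # {'layer.weight': 2.0}
```

Real merge tools apply the same idea tensor-by-tensor over full checkpoints, and fancier methods (SLERP, TIES) replace the plain average with more selective combination rules.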
## Ministral 4b Instruct
Apache-2.0 · ministral · 151 downloads · 5 likes

Ministral is a 4-billion-parameter GPT-style model that adopts the same architecture as Mistral at a smaller scale, designed primarily for English text generation tasks.

Tags: Large Language Model, Transformers, English
## Lzlv Limarpv3 L2 70b GGUF
mradermacher · 67 downloads · 3 likes

A static quantization of Doctor-Shotgun/lzlv-limarpv3-l2-70b, offering multiple quantization options to suit different needs.

Tags: Large Language Model, English
## Daringmaid 13B
Kooten · 76 downloads · 15 likes

Daring Maid-13B is a smarter, more instruction-following version of Noromaid, created by blending features from several strong models.

Tags: Large Language Model, Transformers, English
## Rose 20B GGUF
TheBloke · 612 downloads · 27 likes

Rose 20B is a 20B-parameter large language model based on the LLaMA architecture, using Alpaca-style instruction templates and suitable for text generation tasks.

Tags: Large Language Model, English
## Tinyllama 1.1B Chat V0.4 GGUF
Apache-2.0 · afrideva · 65 downloads · 4 likes

TinyLlama-1.1B is a compact 1.1-billion-parameter large language model based on the Llama 2 architecture, optimized for compute- and memory-constrained scenarios.

Tags: Large Language Model, English
## Tinymistral 248M GGUF
Apache-2.0 · afrideva · 211 downloads · 5 likes

TinyMistral-248M is a small language model pretrained from Mistral 7B, scaled down to roughly 248 million parameters and intended mainly for fine-tuning on downstream tasks.

Tags: Large Language Model, English
## Opus V0 7B GGUF
TheBloke · 2,467 downloads · 13 likes

Opus V0 7B is a 7B-parameter language model based on the Mistral architecture, developed by DreamGen and focused on text generation tasks.

Tags: Large Language Model, English
## Tinyllama 1.1B Alpaca Chat V1.5 GGUF
Apache-2.0 · afrideva · 44 downloads · 2 likes

A lightweight conversational model fine-tuned from TinyLlama-1.1B on the Alpaca dataset, suitable for English text generation tasks.

Tags: Large Language Model, English
## Yarn Mistral 7B 128k AWQ
Apache-2.0 · TheBloke · 483 downloads · 72 likes

Yarn Mistral 7B 128K is a language model optimized for long-context processing, further pretrained on long-context data with the YaRN extension method and supporting a 128k-token context window.

Tags: Large Language Model, Transformers, English
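
As context for how such window extensions work: YaRN builds on rotary position embeddings (RoPE), rescaling positions so that long contexts map into the range the model saw in pretraining. The sketch below shows plain linear position interpolation, a simplification that YaRN refines with per-frequency scaling; `rope_angles` is an illustrative helper, not an API from any library:

```python
def rope_angles(position: int, dim: int, base: float = 10000.0, scale: float = 1.0):
    """Rotary-embedding rotation angles for one token position.

    scale > 1 compresses positions so a longer context fits the range the
    model was pretrained on (e.g. extending 8k to 128k implies scale=16).
    """
    pos = position / scale
    # One angle per frequency pair, with geometrically decreasing frequencies.
    return [pos / base ** (2 * i / dim) for i in range(dim // 2)]

# With scale=16, position 16000 gets the same angles as position 1000 unscaled.
assert rope_angles(16000, 8, scale=16.0) == rope_angles(1000, 8)
```

YaRN's refinement is to interpolate high-frequency components less aggressively than low-frequency ones, which preserves short-range resolution better than this uniform rescaling.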
## Mistral 7b Guanaco
Apache-2.0 · kingabzpro · 67 downloads · 3 likes

A pretrained language model based on the Llama2 architecture, suitable for English text generation tasks.

Tags: Large Language Model, Transformers, English
## Gpt2 Demo
Other · demo-leaderboard · 19.21k downloads · 1 like

GPT-2 is a self-supervised pretrained language model based on the Transformer architecture, excelling at text generation tasks.

Tags: Large Language Model, Transformers
## Tinystories Gpt2 3M
calum · 637 downloads · 7 likes

A small GPT-2 model pretrained on the TinyStories V2 dataset, with 3M trainable parameters and good text generation coherence.

Tags: Large Language Model, Transformers, English
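
A parameter count like 3M can be sanity-checked from the architecture: a GPT-2-style block contributes roughly 12·d² weights (4·d² for attention, 8·d² for the MLP) on top of the embedding tables. The hyperparameters below are hypothetical, chosen only to show the arithmetic, since the card does not list the real configuration:

```python
def gpt2_param_estimate(n_layer: int, d_model: int, vocab_size: int, n_ctx: int) -> int:
    """Rough GPT-2 parameter count: embeddings + 12*d^2 per block.

    Ignores biases and LayerNorm weights, a small fraction of the total.
    """
    embeddings = vocab_size * d_model + n_ctx * d_model  # token + position tables
    per_block = 12 * d_model ** 2  # attention (4 d^2) + MLP (8 d^2)
    return embeddings + n_layer * per_block

# Hypothetical tiny config landing in the ~3M-parameter range:
print(gpt2_param_estimate(n_layer=8, d_model=128, vocab_size=10000, n_ctx=512))
# → 2918400
```

The same formula reproduces the familiar ~124M figure for GPT-2 small (12 layers, d=768, 50257-token vocabulary).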
## Tinyllama 42M Fp32
MIT · nickypro · 517 downloads · 3 likes

A 42M-parameter Llama 2-architecture model in float32 precision, trained on the TinyStories dataset and suitable for simple text generation tasks.

Tags: Large Language Model, Transformers
## Instruct Llama70B Dolly15k
Brillibits · 114 downloads · 1 like

An instruction-following model fine-tuned from Llama-2-70B on the Dolly15k dataset, suitable for English text generation tasks.

Tags: Large Language Model, Transformers, English
## Phi Hermes 1.3B
Other · teknium · 45 downloads · 44 likes

A Phi-1.5 model fine-tuned on the Hermes dataset, used primarily for text generation tasks.

Tags: Large Language Model, Transformers, English
## Qcammel 70 X GGUF
Other · TheBloke · 1,264 downloads · 4 likes

qCammel 70 is a large language model based on the Llama 2 architecture, developed by augtoma and quantized by TheBloke. It focuses on text generation and offers multiple quantization versions to suit different hardware requirements.

Tags: Large Language Model, English
## Pile T5 Large
EleutherAI · 112 downloads · 15 likes

Pile-T5 Large is an encoder-decoder model trained on The Pile with the T5x library, used primarily for English text-to-text generation tasks.

Tags: Large Language Model, Transformers, English
## Llama 2 70b Hf
meta-llama · 33.86k downloads · 849 likes

Llama 2 is Meta's open-source large language model series, ranging from 7B to 70B parameters and supporting English text generation tasks.

Tags: Large Language Model, Transformers, English
## Cerebras GPT 2.7B
Apache-2.0 · cerebras · 269 downloads · 44 likes

Cerebras-GPT 2.7B is a Transformer-based language model intended to support research on large language models and to serve as a base model for natural language processing work.

Tags: Large Language Model, Transformers, English
## Cerebras GPT 590M
Apache-2.0 · cerebras · 2,430 downloads · 21 likes

Cerebras-GPT 590M is a Transformer-based language model in the Cerebras-GPT family, built to study the scaling laws of large language models and to demonstrate the simplicity and scalability of training them on the Cerebras software and hardware stack.

Tags: Large Language Model, Transformers, English
## Pythia 2.8b
Apache-2.0 · EleutherAI · 40.38k downloads · 30 likes

Pythia-2.8B is a member of the scalable language model suite developed by EleutherAI, designed specifically to advance interpretability research on large language models. It is based on the Transformer architecture, trained on The Pile dataset, and has 2.8 billion parameters.

Tags: Large Language Model, Transformers, English
## Pythia 160m
Apache-2.0 · EleutherAI · 163.75k downloads · 31 likes

Pythia-160M is a language model for interpretability research developed by EleutherAI. It is the 160M-parameter member of the Pythia suite, based on the Transformer architecture and trained on The Pile dataset.

Tags: Large Language Model, Transformers, English
© 2025 AIbase